# Small Parameter Efficiency
## OpenRS3 GRPO Ja
OpenRS3-GRPO-ja is a fine-tuned version of the SakanaAI/TinySwallow-1.5B-Instruct model, trained on a Japanese mathematical instruction dataset with the GRPO method and focused on mathematical reasoning tasks.
- Category: Large Language Model
- Tags: Transformers
- Author: EQUES · Downloads: 25 · Likes: 3
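A minimal sketch of querying such a checkpoint with the transformers text-generation pipeline. The repository id used below is an assumption (the actual Hub path for the EQUES upload may differ), and the prompt is only illustrative.

```python
# Minimal sketch: prompting a GRPO-tuned Japanese math model via transformers.
# The repo id below is an assumption; substitute the actual Hub path.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EQUES/OpenRS3-GRPO-ja",  # assumed repo id
    device_map="auto",
)

prompt = "次の方程式を解いてください: 2x + 3 = 11"
output = generator(prompt, max_new_tokens=256, do_sample=False)
print(output[0]["generated_text"])
```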
## Teacher Persona GGUF
Qwen2-1.5B-Instruct is a 1.5-billion-parameter instruction-tuned large language model released by Alibaba Cloud, suitable for Q&A and dialogue tasks.
- Category: Large Language Model
- Author: RyZhangHason · Downloads: 24 · Likes: 1
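Because the weights are distributed in GGUF format, they are typically run with llama.cpp or its Python bindings rather than transformers. A minimal sketch with llama-cpp-python; the local file name and the system prompt are placeholders.

```python
# Minimal sketch: running a GGUF checkpoint with llama-cpp-python.
# The file path is a placeholder; point it at the downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./teacher-persona-q4_k_m.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a patient teacher."},
        {"role": "user", "content": "Explain fractions to a beginner."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```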
## Cuckoo C4
Cuckoo is a small (300M-parameter) information extraction model that extracts information efficiently by mimicking the next-word prediction paradigm of large language models.
- License: MIT
- Category: Large Language Model
- Tags: Transformers
- Author: KomeijiForce · Downloads: 15 · Likes: 1
## Lava Phi
A vision-language model based on Microsoft's Phi-1.5 architecture, combined with CLIP for image processing capabilities.
- License: MIT
- Category: Image-to-Text
- Tags: Transformers, Supports Multiple Languages
- Author: sagar007 · Downloads: 17 · Likes: 0
## Llava Phi 3 Mini Hf
A LLaVA model fine-tuned from Phi-3-mini-4k-instruct and CLIP-ViT-Large-patch14-336, supporting image-to-text tasks.
- Category: Image-to-Text
- Tags: Transformers
- Author: xtuner · Downloads: 2,322 · Likes: 49
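A minimal sketch of image captioning with the transformers image-to-text pipeline. The Phi-3-style prompt markers around the `<image>` token follow the convention used for this model family, but the image URL is a placeholder and the template should be checked against the repository before relying on it.

```python
# Minimal sketch: image-to-text with xtuner/llava-phi-3-mini-hf via transformers.
# The prompt template is assumed from the Phi-3 chat format; verify on the model card.
from transformers import pipeline
from PIL import Image
import requests

pipe = pipeline("image-to-text", model="xtuner/llava-phi-3-mini-hf", device_map="auto")

url = "https://example.com/cat.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

prompt = "<|user|>\n<image>\nDescribe this picture.<|end|>\n<|assistant|>\n"
result = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 128})
print(result[0]["generated_text"])
```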
## Phi 2 Sft Ultrachat Full
A large language model based on microsoft/phi-2 and fine-tuned on the ultrachat_200k dataset, suitable for dialogue generation tasks.
- License: MIT
- Category: Large Language Model
- Tags: Transformers, Other
- Author: lole25 · Downloads: 68 · Likes: 2
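A minimal sketch of dialogue generation with the tokenizer's chat template. The repository id is inferred from the author and model name and may differ, and the sketch assumes the uploaded tokenizer ships a chat template (common for ultrachat-style SFT checkpoints).

```python
# Minimal sketch: chat-style generation with a phi-2 SFT checkpoint.
# The repo id is assumed from the listing; substitute the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lole25/phi-2-sft-ultrachat-full"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```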
## Open Llama 3b V2 Wizard Evol Instuct V2 196k AWQ
A model based on the Open Llama 3B V2 architecture, trained on the WizardLM_evol_instruct_V2_196k dataset and quantized with AWQ, suitable for instruction-following tasks.
- License: Apache-2.0
- Category: Large Language Model
- Tags: Transformers, English
- Author: TheBloke · Downloads: 64 · Likes: 1
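AWQ-quantized checkpoints can be loaded directly through transformers when the autoawq package is installed. A minimal sketch; the repository id is an assumption derived from the listing title (including its "Instuct" spelling), and the instruction-style prompt is only illustrative rather than the model's documented template.

```python
# Minimal sketch: loading an AWQ-quantized checkpoint through transformers
# (requires the `autoawq` package). The repo id is assumed from the listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/open-llama-3b-v2-wizard-evol-instuct-v2-196k-AWQ"  # assumed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative instruction prompt; check the model card for the exact template.
prompt = "### Instruction:\nList three uses of a paperclip.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```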
## Lamini T5 61M
LaMini-T5-61M is a 61M-parameter instruction-following model based on the T5-small architecture and fine-tuned on the LaMini-instruction dataset.
- Category: Large Language Model
- Tags: Transformers, English
- Author: MBZUAI · Downloads: 1,287 · Likes: 18
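Because LaMini-T5-61M is an encoder-decoder (T5) model, it is served through the text2text-generation pipeline rather than text-generation. A minimal sketch using the MBZUAI/LaMini-T5-61M repository id as published on the Hub:

```python
# Minimal sketch: instruction following with the seq2seq LaMini-T5-61M model.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="MBZUAI/LaMini-T5-61M")

instruction = "Write a one-sentence summary of what photosynthesis is."
result = pipe(instruction, max_new_tokens=64)
print(result[0]["generated_text"])
```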